Search Results: "craig"

5 April 2020

Craig Small: WordPress 5.4

Debian packages for WordPress version 5.4 will be uploaded shortly. I'm just going through the install testing now. One problem I have noticed is that, at least for my setup, there is an issue with network updates. The problem is that WordPress will ask me if I want to update the network sites, I say yes, and I get an SSL error. After lots of debugging, the problem is that the fsockopen option to use SNI is turned off for network updates. My sites need SNI, so without it they just bomb out with an SSL handshake error. I'm not sure what the real fix is, but my work-around was to temporarily set the SNI option in the fsockopen transport while doing the site updates. The file you want is wp-includes/Requests/Transport/fsockopen.php, and in the request method of Requests_Transport_fsockopen you'll see something like:
                        stream_context_set_option($context, array('ssl' => $context_options));
                }
                else {
                        $remote_socket = 'tcp://' . $host;
                }
Just before the stream_context_set_option put the line:
                        $context_options['SNI_enabled'] = true;
Ugly, but it works.

17 November 2017

Craig Small: Short Delay with WordPress 4.9

You may have heard WordPress 4.9 is out. While this seems a good improvement over 4.8, it has a new editor that uses codemirror. So what's the problem? Well, inside codemirror is jshint, and this has that idiotic "no evil" license. I think this was added in by WordPress, not codemirror itself. So basically WordPress 4.9 has a file, or actually a tiny part of a file, that is non-free. I'll now have to delay the update of WordPress to hack that piece out, which probably means removing the javascript linter. Not ideal, but that's the way things go.

17 October 2017

Russ Allbery: Bundle haul

Confession time: I started making these posts (eons ago) because a close friend did as well, and I enjoyed reading them. But the main reason why I continue is because the primary way I have to keep track of the books I've bought and avoid duplicates is, well, grep on these posts. I should come up with a non-bullshit way of doing this, but time to do more elegant things is in short supply, and, well, it's my blog. So I'm boring all of you who read this in various places with my internal bookkeeping. I do try to at least add a bit of commentary. This one will be more tedious than most since it includes five separate Humble Bundles, which increases the volume a lot. (I just realized I'd forgotten to record those purchases from the past several months.) First, the individual books I bought directly: Ilona Andrews Sweep in Peace (sff)
Ilona Andrews One Fell Sweep (sff)
Steven Brust Vallista (sff)
Nicky Drayden The Prey of Gods (sff)
Meg Elison The Book of the Unnamed Midwife (sff)
Pat Green Night Moves (nonfiction)
Ann Leckie Provenance (sff)
Seanan McGuire Once Broken Faith (sff)
Seanan McGuire The Brightest Fell (sff)
K. Arsenault Rivera The Tiger's Daughter (sff)
Matthew Walker Why We Sleep (nonfiction)
Some new books by favorite authors, a few new releases I heard good things about, and two (Night Moves and Why We Sleep) from references in on-line articles that impressed me. The books from security bundles (this is mostly work reading, assuming I'll get to any of it), including a blockchain bundle: Wil Allsop Unauthorised Access (nonfiction)
Ross Anderson Security Engineering (nonfiction)
Chris Anley, et al. The Shellcoder's Handbook (nonfiction)
Conrad Barsky & Chris Wilmer Bitcoin for the Befuddled (nonfiction)
Imran Bashir Mastering Blockchain (nonfiction)
Richard Bejtlich The Practice of Network Security (nonfiction)
Kariappa Bheemaiah The Blockchain Alternative (nonfiction)
Violet Blue Smart Girl's Guide to Privacy (nonfiction)
Richard Caetano Learning Bitcoin (nonfiction)
Nick Cano Game Hacking (nonfiction)
Bruce Dang, et al. Practical Reverse Engineering (nonfiction)
Chris Dannen Introducing Ethereum and Solidity (nonfiction)
Daniel Drescher Blockchain Basics (nonfiction)
Chris Eagle The IDA Pro Book, 2nd Edition (nonfiction)
Nikolay Elenkov Android Security Internals (nonfiction)
Jon Erickson Hacking, 2nd Edition (nonfiction)
Pedro Franco Understanding Bitcoin (nonfiction)
Christopher Hadnagy Social Engineering (nonfiction)
Peter N.M. Hansteen The Book of PF (nonfiction)
Brian Kelly The Bitcoin Big Bang (nonfiction)
David Kennedy, et al. Metasploit (nonfiction)
Manul Laphroaig (ed.) PoC GTFO (nonfiction)
Michael Hale Ligh, et al. The Art of Memory Forensics (nonfiction)
Michael Hale Ligh, et al. Malware Analyst's Cookbook (nonfiction)
Michael W. Lucas Absolute OpenBSD, 2nd Edition (nonfiction)
Bruce Nikkel Practical Forensic Imaging (nonfiction)
Sean-Philip Oriyano CEHv9 (nonfiction)
Kevin D. Mitnick The Art of Deception (nonfiction)
Narayan Prusty Building Blockchain Projects (nonfiction)
Prypto Bitcoin for Dummies (nonfiction)
Chris Sanders Practical Packet Analysis, 3rd Edition (nonfiction)
Bruce Schneier Applied Cryptography (nonfiction)
Adam Shostack Threat Modeling (nonfiction)
Craig Smith The Car Hacker's Handbook (nonfiction)
Dafydd Stuttard & Marcus Pinto The Web Application Hacker's Handbook (nonfiction)
Albert Szmigielski Bitcoin Essentials (nonfiction)
David Thiel iOS Application Security (nonfiction)
Georgia Weidman Penetration Testing (nonfiction)
Finally, the two SF bundles: Buzz Aldrin & John Barnes Encounter with Tiber (sff)
Poul Anderson Orion Shall Rise (sff)
Greg Bear The Forge of God (sff)
Octavia E. Butler Dawn (sff)
William C. Dietz Steelheart (sff)
J.L. Doty A Choice of Treasons (sff)
Harlan Ellison The City on the Edge of Forever (sff)
Toh Enjoe Self-Reference ENGINE (sff)
David Feintuch Midshipman's Hope (sff)
Alan Dean Foster Icerigger (sff)
Alan Dean Foster Mission to Moulokin (sff)
Alan Dean Foster The Deluge Drivers (sff)
Taiyo Fujii Orbital Cloud (sff)
Hideo Furukawa Belka, Why Don't You Bark? (sff)
Haikasoru (ed.) Saiensu Fikushon 2016 (sff anthology)
Joe Haldeman All My Sins Remembered (sff)
Jyouji Hayashi The Ouroboros Wave (sff)
Sergei Lukyanenko The Genome (sff)
Chohei Kambayashi Good Luck, Yukikaze (sff)
Chohei Kambayashi Yukikaze (sff)
Sakyo Komatsu Virus (sff)
Miyuki Miyabe The Book of Heroes (sff)
Kazuki Sakuraba Red Girls (sff)
Robert Silverberg Across a Billion Years (sff)
Allen Steele Orbital Decay (sff)
Bruce Sterling Schismatrix Plus (sff)
Michael Swanwick Vacuum Flowers (sff)
Yoshiki Tanaka Legend of the Galactic Heroes, Volume 1: Dawn (sff)
Yoshiki Tanaka Legend of the Galactic Heroes, Volume 2: Ambition (sff)
Yoshiki Tanaka Legend of the Galactic Heroes, Volume 3: Endurance (sff)
Tow Ubukata Mardock Scramble (sff)
Sayuri Ueda The Cage of Zeus (sff)
Sean Williams & Shane Dix Echoes of Earth (sff)
Hiroshi Yamamoto MM9 (sff)
Timothy Zahn Blackcollar (sff)
Phew. Okay, all caught up, and hopefully won't have to dump something like this again in the near future. Also, more books than I have any actual time to read, but what else is new.

12 June 2017

Craig Small: psmisc 23.0

I had to go check, but it has been over 3 years since the last psmisc release back in February 2014. I really didn't think it had been that long ago. Anyhow, with no further delay, psmisc version 23.0 has been released today! Update: 23.1 is out now, which removes some debug lines from killall and ships two missing documents. This release is just a few feature updates and minor bug fixes. The changelog lists them all, but these are the highlights. killall namespace filtering: killall was not aware of namespaces, which meant if you wanted to kill all specified processes in the root namespace, it did that, but also in all the child namespaces. So now it will by default only kill processes in its current PID namespace, and there is a new -n flag that takes either 0 (match all namespaces) or a PID whose namespace should be used. killall command name parsing: This is similar to the bug sudo had where it didn't parse process names properly. A crafted process name meant killall missed it, even if you specified the username or tty. While I checked for procps having this problem (it didn't), I didn't check psmisc. Now killall and sudo use a similar parsing method to procps. New program: pslog. Want to know what logs a process is writing to? pslog can help you here. It will report on what files in /var/log are opened by the specified process ID.
pslog 26475
Pid no 26475:
Log path: /opt/observium/logs/error_log
Log path: /var/log/apache2/other_vhosts_access.log
Log path: /opt/observium/logs/access_log
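The new namespace behaviour looks something like this in use; this is only a sketch based on the description above (the process name is made up), so check killall(1) from 23.x for the exact semantics:
# default in 23.0: only processes in killall's own PID namespace are matched
killall myserver
# -n 0: match processes in all PID namespaces
killall -n 0 myserver
# -n 1234: only match processes in the same PID namespace as PID 1234
killall -n 1234 myserver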
Finding psmisc: psmisc will be available in your usual distributions shortly. The Debian packages are about to be uploaded and will be in the sid distribution soon. Other distributions, I imagine, will follow. For the source code, look in the GitLab repository or the Sourceforge file location.

1 June 2017

Markus Koschany: My Free Software Activities in May 2017

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you're interested in Java, Games and LTS topics, this might be interesting for you. Debian Games: Bug fixes. New upstream release. Debian Java. Debian LTS: This was my fifteenth month as a paid contributor and I have been paid to work 27,25 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following: Misc. Thanks for reading and see you next time.

31 May 2017

Craig Small: The sudo tty bug and procps

There have been recent reports of a security bug in sudo (CVE-2017-1000367) where you can fool sudo about what controlling terminal it is running on to bypass its security checks. One of the first things I thought of was: is procps vulnerable to the same bug? Sure, it wouldn't be a security bypass, but it would be a normal sort of bug. A lot of programs in procps have a concept of a controlling terminal, or the TTY field, for either viewing or filtering; could they be fooled into thinking the process had a different controlling terminal? Was I going to be in the same pickle as the sudo maintainers? The meat between the stat parsing sandwich? Can I find any more puns related somehow to the XKCD comic? TLDR: No. How to find the tty: Most ways of finding what the controlling terminal for a process is are very similar. The file /proc/<pid>/stat is a one-line pseudo file that the kernel creates on access and that has information about the particular process. A typical file would look like:
20209 (bash) S 14762 20209 20209 34822 20209 4194304 32181 4846307 625 1602 66 3
0 16265 4547 20 0 1 0 139245105 25202688 1349 18446744073709551615 4194304 52421
32 140737059557984 0 0 0 0 3670020 1266777851 1 0 0 17 1 0 0 280 0 0 7341384 738
8228 39092224 140737059564618 140737059564628 140737059564628 140737059569646 0
The first field is the PID, the second the process name (which may be different from the command line, but that's another story); then skip along to field #7, which in this case is 34822. Also notice the process name is in brackets; that is important. So 34822, how do we figure out what device this is? The number combines the major and minor device numbers of the controlling terminal. 34822 in hex is 8806: the device has a major number of 0x88, or 136, and a minor number of 0x06. Most programs just scan the usual device directories until they find a match (which is basically how procps does it). Device 136,6 is /dev/pts/6
crw--w---- 1 user tty 136, 6 May 29 16:20 /dev/pts/6
$ ps -o tty,cmd 20209
TT CMD
pts/6 /bin/bash
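If you want to check the major/minor arithmetic yourself, a quick sketch:
$ printf '%x\n' 34822
8806
The 0x88 is major 136 and the 0x06 is minor 6, which is how we arrive at /dev/pts/6 above.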
The Bug: The process of taking the raw stat file and turning it into a bunch of useful fields is called parsing. The bug in sudo was due to how it parsed the file. The stat file is a space-delimited file. The program scanned the file, character by character, until it came across the 6th space. The problem is, you can put spaces in your command name and fool sudo. Once you know that, you can make sudo think the program is running on any (or at least a different) controlling terminal. The bug reporters then used some clever symlinking and race techniques to get root. What about procps? The parsing of the stat file in current (as of writing) procps is found in proc/readproc.c within the function stat2proc(). However, it is not just a simple sscanf or something that runs along the line looking for spaces. To find the command, the program does the following:
 S = strchr(S, '(') + 1;
 tmp = strrchr(S, ')');
 num = tmp - S;
 if(unlikely(num >= sizeof P->cmd)) num = sizeof P->cmd - 1;
 memcpy(P->cmd, S, num);
 P->cmd[num] = '\0';
 S = tmp + 2; // skip ") "
The sscanf then comes after we have found the command, using the variable S, to fill in the other fields, including the controlling terminal device numbers. The procps library looks for the command within brackets, so if your program has spaces in its name, it is still found. By using strrchr (effectively, find the last occurrence) you cannot fool it with a bracket in the command either. So procps is not vulnerable to this sort of trickery. Incidentally, the fix for the sudo bug now uses strrchr to find a close bracket, so it solves the problem the same way. The check for the close bracket appeared in procps 3.1.4 back in 2002, though the stat2proc function was warning about oddly named processes before then. As it says in the 2002 change:
Reads /proc/*/stat files, being careful not to trip over processes with names like ":-) 1 2 3 4 5 6".
That's something we can all agree on!
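As an aside, if you want to see such a pathological name in /proc for yourself, here is a throwaway sketch (the file name is just an example):
$ cp /bin/sleep '/tmp/evil ) 1 2 3'
$ '/tmp/evil ) 1 2 3' 60 &
$ cat /proc/$!/stat
# the line starts with something like:  <pid> (evil ) 1 2 3) S ...
# a parser that counts spaces or stops at the first ')' gets lost here; strrchr for the last ')' does not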

16 February 2017

Craig Sanders: New D&D Cantrip

Name: Alternative Fact
Level: 0
School: EN
Time: 1 action
Range: global, contagious
Components: V, S, M (one racial, cultural or religious minority to blame)
Duration: Permanent (irrevocable)
Classes: Cleric, (Grand) Wizard, Con-man Politician. The caster can tell any lie, no matter how absurd or outrageous (in fact, the more outrageous the better), and anyone hearing it (or hearing about it later) with an INT of 10 or less will believe it instantly, with no saving throw. They will defend their new belief to the death, theirs or yours. This belief cannot be disbelieved, nor can it be defeated by any form of education, logic, evidence, or reason. It is completely incurable. Dispel Magic does not work against it, and Remove Curse is also ineffectual. New D&D Cantrip is a post from: Errata

7 February 2017

Craig Small: WordPress 4.7.2

When WordPress originally announced their latest security update, there were three security fixes. While all security updates can be serious, they didn't seem too bad. Shortly after, they updated their announcement with a fourth and more serious security problem. I have looked after the Debian WordPress package for a while, and this is the first time I have heard of people actually having their sites hacked almost as soon as a vulnerability was announced. If you are running WordPress 4.7 or 4.7.1, your website is vulnerable and there are bots out there looking for it. You should immediately upgrade to 4.7.2 (or, if there is a later 4.7.x version, to that). There are now updated Debian wordpress 4.7.2 packages for unstable, testing and stable backports. For stable, you are on a patched version 4.1 which doesn't have this specific vulnerability (it was introduced in 4.7), but you should be using 4.1+dfsg-1+deb8u12, which has the fixes found in 4.7.1 ported back to the 4.1 code.

12 October 2016

Craig Small: axdigi resurrected

It seems funny to talk about 20 year old code that was a stop-gap measure to provide a bridging function the kernel had not (as yet) got, but here it is: my old bridge code. When I first started getting involved in Free Software, I was also involved with hamradio. In 1994 I released my first Free Software (or Open Source) program, called axdigi. This program allowed you to "digipeat", which was effectively source route bridging across hamradio packet networks. The code I used for this was originally network sniffer code to debug my PackeTwin kernel driver, but I got frustrated at there being no digipeating function within Linux, so I wrote axdigi, which is about 200 lines. The funny thing is, back then I thought it would be a temporary solution until digipeating got put into the kernel, which it temporarily did and then got removed. Recently some people asked me about axdigi and whether there is an official place where the code lives. The answer is that the last axdigi release was 0.02, written in July 1995. It seems strange to resurrect 20 year old code, but it is still in use, though it does show its age. I've done some quick work on getting rid of the compiler warnings but there is more to do. So now axdigi has a nice shiny new home on GitHub, at https://github.com/csmall/axdigi

11 October 2016

Craig Small: Changing Jabber IDs

I've shuffled some domains around, using less of enc.com.au and more of my new domain dropbear.xyz. The website should work with both, but the primary domain is dropbear.xyz. Another change is my Jabber ID, which used to be csmall at enc but is now the same username at dropbear.xyz. I think I have done all the required changes in prosody for it to work, even with a certbot certificate!

9 October 2016

Craig Sanders: Converting to a ZFS rootfs

My main desktop/server machine (running Debian sid) at home has been running XFS on mdadm raid-1 on a pair of SSDs for the last few years. A few days ago, one of the SSDs died. I've been planning to switch to ZFS as the root filesystem for a while now, so instead of just replacing the failed drive, I took the opportunity to convert it. NOTE: at this point in time, ZFS On Linux does NOT support TRIM for either datasets or zvols on SSD. There's a patch almost ready (TRIM/Discard support from Nexenta #3656), so I'm betting on that getting merged before it becomes an issue for me. Here's the procedure I came up with: 1. Buy new disks, shutdown machine, install new disks, reboot. The details of this stage are unimportant, and the only thing to note is that I'm switching from mdadm RAID-1 with two SSDs to ZFS with two mirrored pairs (RAID-10) on four SSDs (Crucial MX300 275G at around $100 AUD each, they're hard to resist). Buying four 275G SSDs is slightly more expensive than buying two of the 525G models, but will perform a lot better. When installed in the machine, they ended up as /dev/sdp, /dev/sdq, /dev/sdr, and /dev/sds. I'll be using the symlinks in /dev/disk/by-id/ for the zpool, but for partitioning and setup, it's easiest to use the /dev/sd? device nodes. 2. Partition the disks identically with gpt partition tables, using gdisk and sgdisk. I need: a BIOS boot partition, an EFI system partition, a partition for the raid-1 /boot, swap, the main ZFS partition, and small partitions to use later for ZIL and L2ARC. ZFS On Linux uses partition type bf07 ("Solaris Reserved 1") natively, but doesn't seem to care what the partition types are for ZIL and L2ARC. I arbitrarily used bf08 ("Solaris Reserved 2") and bf09 ("Solaris Reserved 3") for easy identification. I'll set these up later, once I've got the system booted; I don't want to risk breaking my existing zpools by taking away their ZIL and L2ARC (and forgetting to zpool remove them, which I might possibly have done once) if I have to repartition. I used gdisk to interactively set up the partitions:
# gdisk -l /dev/sdp
GPT fdisk (gdisk) version 1.0.1
Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present
Found valid GPT with protective MBR; using GPT.
Disk /dev/sdp: 537234768 sectors, 256.2 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 4234FE49-FCF0-48AE-828B-3C52448E8CBD
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 537234734
Partitions will be aligned on 8-sector boundaries
Total free space is 6 sectors (3.0 KiB)
Number  Start (sector)    End (sector)  Size       Code  Name
   1              40            2047   1004.0 KiB  EF02  BIOS boot partition
   2            2048         2099199   1024.0 MiB  EF00  EFI System
   3         2099200         6293503   2.0 GiB     8300  Linux filesystem
   4         6293504        14682111   4.0 GiB     8200  Linux swap
   5        14682112       455084031   210.0 GiB   BF07  Solaris Reserved 1
   6       455084032       459278335   2.0 GiB     BF08  Solaris Reserved 2
   7       459278336       537234734   37.2 GiB    BF09  Solaris Reserved 3
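For reference, the same layout can be created non-interactively with sgdisk; this is only a sketch reconstructed from the table above (sector numbers copied verbatim), not the exact commands used here:
sgdisk \
  -n 1:40:2047              -t 1:ef02 -c 1:"BIOS boot partition" \
  -n 2:2048:2099199         -t 2:ef00 -c 2:"EFI System" \
  -n 3:2099200:6293503      -t 3:8300 -c 3:"Linux filesystem" \
  -n 4:6293504:14682111     -t 4:8200 -c 4:"Linux swap" \
  -n 5:14682112:455084031   -t 5:bf07 -c 5:"Solaris Reserved 1" \
  -n 6:455084032:459278335  -t 6:bf08 -c 6:"Solaris Reserved 2" \
  -n 7:459278336:537234734  -t 7:bf09 -c 7:"Solaris Reserved 3" \
  /dev/sdp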
I then cloned the partition table to the other three SSDs with this little script: clone-partitions.sh
#! /bin/bash
src='sdp'
targets=( 'sdq' 'sdr' 'sds' )
for tgt in "${targets[@]}"; do
  sgdisk --replicate="/dev/$tgt" /dev/"$src"
  sgdisk --randomize-guids "/dev/$tgt"
done
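A quick sanity check (not part of the original script) that the tables really did replicate, each with its own randomised GUIDs:
for d in sdp sdq sdr sds; do
  echo "== $d =="
  sgdisk -p "/dev/$d"
done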
3. Create the mdadm for /boot, the zpool, and the root filesystem. Most rootfs-on-ZFS guides that I've seen say to call the pool rpool, then create a dataset called "$(hostname)-1" and then create a ROOT dataset under that, so on my machine that would be rpool/ganesh-1/ROOT. Some reverse the order of hostname and the rootfs dataset, for rpool/ROOT/ganesh-1. There might be uses for this naming scheme in other environments, but not in mine. And, to me, it looks ugly. So I'll use just $(hostname)/root for the rootfs, i.e. ganesh/root. I wrote a script to automate it, figuring I'd probably have to do it several times in order to optimise performance. Also, I wanted to document the procedure for future reference, and have scripts that would be trivial to modify for other machines. create.sh
#! /bin/bash
exec &> ./create.log
hn="$(hostname -s)"
base='ata-Crucial_CT275MX300SSD1_'
md='/dev/md0'
md_part=3
md_parts=( $(/bin/ls -1 /dev/disk/by-id/${base}*-part${md_part}) )
zfs_part=5
# 4 disks, so use the top half and bottom half for the two mirrors.
zmirror1=( $(/bin/ls -1 /dev/disk/by-id/${base}*-part${zfs_part} | head -n 2) )
zmirror2=( $(/bin/ls -1 /dev/disk/by-id/${base}*-part${zfs_part} | tail -n 2) )
# create /boot raid array
mdadm "$md" --create \
    --bitmap=internal \
    --raid-devices=4 \
    --level 1 \
    --metadata=0.90 \
    "$ md_parts[@] "
mkfs.ext4 "$md"
# create zpool
zpool create -o ashift=12 "$hn" \
    mirror "$ zmirror1[@] " \
    mirror "$ zmirror2[@] "
# create zfs rootfs
zfs set compression=on "$hn"
zfs set atime=off "$hn"
zfs create "$hn/root"
zpool set bootfs="$hn/root" "$hn"
# mount the new /boot under the zfs root
mount "$md" "/$hn/root/boot"
If you want or need other ZFS datasets (e.g. for /home, /var etc) then create them here in this script. Or you can do that later after you've got the system up and running on ZFS. If you run mysql or postgresql, read the various tuning guides for how to get best performance for databases on ZFS (they both need their own datasets with particular recordsize and other settings). If you download Linux ISOs or anything with bit-torrent, avoid COW fragmentation by setting up a dataset to download into with recordsize=16K and configure your BT client to move the downloads to another directory on completion. I did this after I got my system booted on ZFS. For my db, I stopped the postgres service, renamed /var/lib/postgresql to /var/lib/p, and created the new datasets with:
zfs create -o recordsize=8K -o logbias=throughput -o mountpoint=/var/lib/postgresql \
  -o primarycache=metadata ganesh/postgres
zfs create -o recordsize=128k -o logbias=latency -o mountpoint=/var/lib/postgresql/9.6/main/pg_xlog \
  -o primarycache=metadata ganesh/pg-xlog
followed by rsync, and then started postgres again. 4. rsync my current system to it. Log out all user sessions, shut down all services that write to the disk (postfix, postgresql, mysql, apache, asterisk, docker, etc). If you haven't booted into recovery/rescue/single-user mode, then you should be as close to it as possible: everything non-essential should be stopped. I chose not to boot to single-user in case I needed access to the web to look things up while I did all this (this machine is my internet gateway). Then:
hn="$(hostname -s)"
time rsync -avxHAXS -h -h --progress --stats --delete / /boot/ "/$hn/root/"
After the rsync, my 130GB of data from XFS was compressed to 91GB on ZFS with transparent lz4 compression. Run the rsync again if (as I did) you realise you forgot to shut down postfix (causing newly arrived mail to not be on the new setup) or something. You can do a (very quick & dirty) performance test now by running zpool scrub "$hn", then watch zpool status "$hn". As there should be no errors to correct, you should get scrub speeds approximating the combined sequential read speed of all vdevs in the pool. In my case, I got around 500-600M/s. I was kind of expecting closer to 800M/s, but that's good enough: the Crucial MX300s aren't the fastest drives available (but they're great for the price), and ZFS is optimised for reliability more than speed. The scrub took about 3 minutes to scan all 91GB. My HDD zpools get around 150 to 250M/s, depending on whether they have mirror or RAID-Z vdevs and on what kind of drives they have. For real benchmarking, use bonnie++ or fio. 5. Prepare the new rootfs for chroot, chroot into it, edit /etc/fstab and /etc/default/grub. This script bind mounts /proc, /sys, /dev, and /dev/pts before chrooting: chroot.sh
#! /bin/sh
hn="$(hostname -s)"
for i in proc sys dev dev/pts ; do
  mount -o bind "/$i" "/${hn}/root/$i"
done
chroot "/${hn}/root"
Change /etc/fstab (on the new zfs root) to have the zfs root and the ext4-on-raid-1 /boot:
/ganesh/root    /         zfs     defaults                                         0  0
/dev/md0        /boot     ext4    defaults,relatime,nodiratime,errors=remount-ro   0  2
I haven't bothered with setting up the swap at this point. That's trivial and I can do it after I've got the system rebooted with its new ZFS rootfs (which reminds me, I still haven't done that :). Add boot=zfs to the GRUB_CMDLINE_LINUX variable in /etc/default/grub. On my system, that's:
GRUB_CMDLINE_LINUX="iommu=noagp usbhid.quirks=0x1B1C:0x1B20:0x408 boot=zfs"
NOTE: If you end up needing to run rsync again as in step 4 above, copy /etc/fstab and /etc/default/grub to the old root filesystem first; I suggest to /etc/fstab.zfs and /etc/default/grub.zfs. 6. Install grub. Here's where things get a little complicated. Running grub-install on /dev/sd[pqrs] is fine; we created the type ef02 partition for it to install itself into. But running update-grub to generate the new /boot/grub/grub.cfg will fail with an error like this:
/usr/sbin/grub-probe: error: failed to get canonical path of '/dev/ata-Crucial_CT275MX300SSD1_163313AADD8A-part5'.
IMO, that's a bug in grub-probe: it should look in /dev/disk/by-id/ if it can't find what it's looking for in /dev/. I fixed that problem with this script: fix-ata-links.sh
#! /bin/sh
cd /dev
ln -s /dev/disk/by-id/ata-Crucial* .
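A more permanent alternative is a udev rule that makes the same top-level /dev/ata-* links at boot. This is only a sketch, modelled on the naming used by udev's stock persistent-storage rules (ID_BUS, ID_SERIAL); the rule file name is made up, so check the resulting links before relying on them:
cat > /etc/udev/rules.d/90-ata-toplevel-links.rules <<'EOF'
ENV{ID_BUS}=="ata", ENV{ID_SERIAL}=="?*", ENV{DEVTYPE}=="disk", SYMLINK+="ata-$env{ID_SERIAL}"
ENV{ID_BUS}=="ata", ENV{ID_SERIAL}=="?*", ENV{DEVTYPE}=="partition", SYMLINK+="ata-$env{ID_SERIAL}-part%n"
EOF
udevadm control --reload && udevadm trigger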
After that, update-grub works fine. NOTE: you will have to add udev rules to create these symlinks (as sketched above), or run this script on every boot; otherwise you'll get that error every time you run update-grub in future. 7. Prepare to reboot. Unmount proc, sys, dev/pts, dev, the new raid /boot, and the new zfs filesystems. Set the mount point for the new rootfs to /. umount-zfs-root.sh
#! /bin/sh
hn="$(hostname -s)"
md="/dev/md0"
for i in dev/pts dev sys proc ; do
  umount "/$ hn /root/$i"
done
umount "$md"
zfs umount "$ hn /root"
zfs umount "$ hn "
zfs set mountpoint=/ "$ hn /root"
zfs set canmount=off "$ hn "
8. Reboot. Remember to configure the BIOS to boot from your new disks. The system should boot up with the new rootfs; no rescue disk required as in some other guides, since the rsync and chroot stuff has already been done. 9. Other notes. 10. Useful references. Reading these made it much easier to come up with my own method. Highly recommended. Converting to a ZFS rootfs is a post from: Errata

15 September 2016

Craig Sanders: Frankenwheezy! Keeping wheezy alive on a container host running libc6 2.24

It's Alive! The day before yesterday (at Infoxchange, a non-profit whose mission is "Technology for Social Justice", where I do a few days/week of volunteer systems & dev work), I had to build a docker container based on an ancient wheezy image. It built fine, and I got on with working with it. Yesterday, I tried to get it built on my docker machine here at home so I could keep working on it, but the damn thing just wouldn't build. At first I thought it was something to do with networking, because running curl in the Dockerfile was the point where it was crashing, but it turned out that many programs would segfault; e.g. it couldn't run bash, but sh (dash) was OK. I also tried running a squeeze image, and that had the same problem. A jessie image worked fine (but the important legacy app we need wheezy for doesn't yet run in jessie). After a fair bit of investigation, it turned out that the only significant difference between my workstation at IX and my docker machine at home was that I'd upgraded my home machines to libc6 2.24-2 a few days ago, whereas my IX workstation (also running sid) was still on libc6 2.23. Anyway, the point of all this is that if anyone else needs to run wheezy on a docker host running libc6 2.24 (which will be quite common soon enough), you have to upgrade libc6 and related packages (and any -dev packages, including libc6-dev, you might need in your container that are dependent on the specific version of libc6). In my case, I was using docker, but I expect that other container systems will have the same problem and the same solution: install libc6 from jessie into wheezy. Also, I haven't actually tested installing jessie's libc6 on squeeze; if it works, I expect it'll require a lot of extra stuff to be installed too. I built a new frankenwheezy image that had libc6 2.19-18+deb8u4 from jessie. To build it, I had to use a system which hadn't already been upgraded to libc6 2.24. I had already upgraded libc6 on all the machines on my home network. Fortunately, I still had my old VM that I created when I first started experimenting with docker. Crazily, it was a VM with two ZFS ZVOLs: a small /dev/vda OS/boot disk, and a larger /dev/vdb mounted as /var/lib/docker. The crazy part is that /dev/vdb was formatted as btrfs (mostly because it seemed a much better choice than aufs). Disk performance wasn't great, but it was OK and it worked. Docker has native support for ZFS, so that's what I'm using on my real hardware. I started with the base wheezy image we're using and created a Dockerfile etc. to update it. First, I added deb lines to the /etc/apt/sources.list for my local jessie and jessie-updates mirror, then I added the following line to /etc/apt/apt.conf:
APT::Default-Release "wheezy";
Without that, any other apt-get installs in the Dockerfile will install from jessie rather than wheezy, which will almost certainly break the legacy app. I forgot to do it the first time, and had to waste another 10 minutes or so building the app's container again. I then installed the following:
apt-get -t jessie install libc6 locales libc6-dev krb5-multidev comerr-dev zlib1g-dev libssl-dev libpq-dev
To minimise the risk of incompatible updates, it's best to install the bare minimum of jessie packages required to get your app running. The only reason I needed to install all of those -dev packages was because we needed libpq-dev, which pulled in all the rest. If your app doesn't need to talk to postgresql, you can skip them. In fact, I probably should try to build it again without them; I added them after the first build failed but before I remembered to set APT::Default-Release (OTOH, it's working OK now and we're probably better off with libssl-dev from jessie anyway). Once it built successfully, I exported the image to a tar file, copied it back to my real Docker machine (co-incidentally, the same machine with the docker VM installed), imported it into docker there and tested it to make sure it didn't have the same segfault issues that the original wheezy image did. No problem, it worked perfectly. That worked, so I edited the FROM line in the Dockerfile for our wheezy app to use frankenwheezy and ran make build. It built, passed tests, deployed and is running. Now I can continue working on the feature I'm adding to it, but I expect there'll be a few more yaks to shave before I'm finished. When I finish what I'm currently working on, I'll take a look at what needs to be done to get this app running on jessie. It's on the TODO list at work, but everyone else is too busy: a perfect job for an unpaid volunteer. Wheezy's getting too old to keep using, and this frankenwheezy needs to float away on an iceberg. Frankenwheezy! Keeping wheezy alive on a container host running libc6 2.24 is a post from: Errata

30 August 2016

Dirk Eddelbuettel: RProtoBuf 0.4.5: now with protobuf v2 and v3!

A few short weeks after the 0.4.4 release of RProtoBuf, we are happy to announce a new version 0.4.5 which appeared on CRAN earlier today. RProtoBuf provides R bindings for the Google Protocol Buffers ("Protobuf") data encoding library used and released by Google, and deployed as a language and operating-system agnostic protocol by numerous projects. This release brings support for the recently-released 'version 3' of the Protocol Buffers standard, used e.g. by the (very exciting) gRPC project (which was just released as version 1.0). RProtoBuf continues to support 'version 2' but now also cleanly supports 'version 3'.
Changes in RProtoBuf version 0.4.5 (2016-08-29)
  • Support for version 3 of the Protocol Buffers API
  • Added 'syntax = "proto2";' to all proto files (PR #17)
  • Updated Travis CI script to test against both versions 2 and 3 using custom-built .deb packages of version 3 (PR #16)
  • Improved build system with support for custom CXXFLAGS (Craig Radcliffe in PR #15)
CRANberries also provides a diff to the previous release. The RProtoBuf page has an older package vignette, a 'quick' overview vignette, a unit test summary vignette, and the pre-print for the JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

28 August 2016

Craig Sanders: fakecloud

I wrote my first Mojolicious web app yesterday, a cloud-init meta-data server to enable running pre-built VM images (e.g. as provided by debian, ubuntu, etc) without having to install and manage a complete, full-featured cloud environment like openstack. I hacked up something similar several years ago, when I was regularly building VM images at home for openstack at work, with just plain-text files served by apache, but that had pretty much everything hard-coded. fakecloud does a lot more and allows per-VM customisation of user-data (using the IP address of the requesting host). Not bad for a day's hacking with a new web framework. https://github.com/craig-sanders/fakecloud fakecloud is a post from: Errata

9 August 2016

Reproducible builds folks: Reproducible builds: week 67 in Stretch cycle

What happened in the Reproducible Builds effort between Sunday July 31 and Saturday August 6 2016: Toolchain development and fixes Packages fixed and bugs filed The following 24 packages have become reproducible - in our current test setup - due to changes in their build-dependencies: alglib aspcud boomaga fcl flute haskell-hopenpgp indigo italc kst ktexteditor libgroove libjson-rpc-cpp libqes luminance-hdr openscenegraph palabos petri-foo pgagent sisl srm-ifce vera++ visp x42-plugins zbackup The following packages have become reproducible after being fixed: The following newly-uploaded packages appear to be reproducible now, for reasons we were not able to figure out. (Relevant changelogs did not mention reproducible builds.) Some uploads have addressed some reproducibility issues, but not all of them: Patches submitted that have not made their way to the archive yet: Package reviews and QA These are reviews of reproducibility issues of Debian packages. 276 package reviews have been added, 172 have been updated and 44 have been removed in this week. 7 FTBFS bugs have been reported by Chris Lamb. Reproducibility tools Test infrastructure For testing the impact of allowing variations of the build path (which up until now we required to be identical for reproducible rebuilds), Reiner Herrmann contributed a patch which enabled build path variations on testing/i386. This is possible now since dpkg 1.18.10 enables the --fixdebugpath build flag feature by default, which should result in reproducible builds (for C code) even with varying paths. So far we haven't had many results due to disturbances in our build network in the last days, but it seems this would mean roughly between 5-15% additional unreproducible packages - compared to what we see now. We'll keep you updated on the numbers (and problems with compilers and common frameworks) as we find them. lynxis continued work to test LEDE and OpenWrt on two different hosts, to include date variation in the tests. Mattia and Holger worked on the (mass) deployment scripts, so that the - for space reasons - only jenkins.debian.net GIT clone resides in ~jenkins-adm/ and not anymore in Holger's homedir, so that soon Mattia (and possibly others!) will be able to fully maintain this setup, while Holger is doing siesta. Miscellaneous Chris, dkg, h01ger and Ximin attended a Core Infrastructure Initiative summit meeting in New York City, to discuss and promote this Reproducible Builds project. The CII was set up in the wake of the Heartbleed SSL vulnerability to support software projects that are critical to the functioning of the internet. This week's edition was written by Ximin Luo and Holger Levsen and reviewed by a bunch of Reproducible Builds folks on IRC.

10 July 2016

Craig Small: procps 3.3.12

The procps developers are happy to announce that version 3.3.12 of procps was released today. This version has a mixture of bug fixes and enhancements. This unfortunately means another API bump, but we are hoping this will be fixed with the new library API coming soon. procps is developed on gitlab and the new version of procps can be found at https://gitlab.com/procps-ng/procps/tree/newlib procps 3.3.12 can be found at https://gitlab.com/procps-ng/procps/tags/v3.3.12 From the NEWS file, procps 3.3.12 has the following: We are hoping this will be the last one to use the old API, and the new format API (imaginatively called newlib) will be used in subsequent releases. Feedback for this and any other version of procps can be sent to either the issue tracker or the development email list.

7 May 2016

Craig Small: Displaying Linux Memory

Memory management is hard, but RAM management may be even harder. Most people know the vague overall concept of how memory usage is displayed within Linux. You have your total memory, which is everything inside the box; then there is used and free, which is what the system is or is not using respectively. Some people might know that not all used is used and some of it actually is free. It can be very confusing to understand, even for someone who maintains procps (the package that contains top and free, two programs that display memory usage). So, how does the memory display work? What free shows: The free program is part of the procps package. Its central goal is to give a quick overview of how much memory is used where. A typical output (e.g. what I saw when I typed free -h) could look like this:
      total   used    free   shared  buff/cache  available
Mem:    15G   3.7G    641M     222M         11G        11G
Swap:   15G   194M     15G
I've used the -h option for human-readable output here for the sake of brevity and because I hate typing long lists of long numbers. People who have good memories (or old computers) may notice there is a missing -/+ buffers/cache line. This was intentionally removed in mid-2014 because, as the memory management of Linux got more and more complicated, these lines became less relevant. They used to help with the "not used used memory" problem mentioned in the introduction, but progress caught up with them. To explain what free is showing, you need to understand some of the underlying statistics that it works with. This isn't a lesson on how Linux manages its memory (the honest short answer is, I don't fully know) but just enough, hopefully, to understand what free is doing. Let's start with the two simple columns first: total and free. Total Memory: This is what memory you have available to Linux. It is almost, but not quite, the amount of memory you put into a physical host or the amount of memory you allocate for a virtual one. Some memory you just can't have, either due to early reservations or devices shadowing the memory area. Unless you start mucking around with those settings or the virtual host, this number stays the same. Free Memory: Memory that nobody at all is using. They haven't reserved it, haven't stashed it away for future use or even just, you know, actually used it. People often obsess about this statistic, but it's probably the most useless one to use for anything directly. I have even considered removing this column, or replacing it with available (see later for what that is), because of the confusion this statistic causes. The reason for its uselessness is that Linux has memory management which allocates memory it doesn't use. This decrements the free counter, but it is not truly "used". If your application needs that memory, it can be given back. A very important statistic to know for running a system is how much memory I have got left before I either run out or start to seriously swap stuff to swap drives. Despite its name, this statistic will not tell you that and will probably mislead you. My advice is, unless you really understand the Linux memory statistics, ignore this one. Who's Using What: Now we come to the components that are using (if that is the right word) the memory within a system. Shared Memory: Shared memory is often thought of only in the context of processes (and makes working out how much memory a process uses tricky, but that's another story), but the kernel has this as well. The shared column lists this, which is a direct report from the Shmem field in the meminfo file. Slabs: For things used a lot within the kernel, it is inefficient to keep going to get small bits of memory here and there all the time. The kernel has this concept of slabs, where it creates small caches for objects or in-kernel data structures that slabinfo(5) states are "[such as] buffer heads, inodes and dentries". So basically kernel stuff for the kernel to do kernelly things with. Slab memory comes in two flavours: reclaimable and unreclaimable. This is important because unreclaimable cannot be handed back if your system starts to run out of memory. Funnily enough, not all reclaimable is, well, reclaimable. A good estimate is you'll only get 50% back; top and free ignore this inconvenient truth and assume it can be 100%. All of the reclaimable slab memory is considered part of the Cached statistic. Unreclaimable is memory that is part of Used.
Page Cache and Cached: Page caches are used to read and write to storage, such as a disk drive. These are the things that get written out when you use sync, and that make the second read of the same file much faster. An interesting quirk is that tmpfs is part of the page cache, so the Cached column may increase if you have a few of these. The Cached column may seem like it should only have the Page Cache, but the reclaimable part of the Slab is added to this value. Some older versions of some programs counted either none or all of the Slab in Cached; both of those are incorrect. Cached makes up part of the buff/cache column with the standard options for free, or has a column to itself with the wide option. Buffers: The second component of the buff/cache column (or separate with the wide option) is kernel buffers. These are the low-level I/O buffers inside the kernel. Generally they are small compared to the other components and can basically be ignored, or just considered part of Cached, which is the default for free. Used: Unlike most of the previous statistics, which are either directly pulled out of the meminfo file or involve some simple addition, the Used column is calculated and completely dependent on the other values. As such it is not telling the whole story here, but it is a reasonably OK estimate of used memory. The Used component is what you have left of your Total memory once you have removed the free memory, the cache (including the reclaimable slab) and the buffers. Notice that the unreclaimable part of slab is not in this calculation, which means it is part of the used memory. Also note this seems a bit of a hack, because as the memory management gets more complicated, the estimates used become less and less real. Available: In early 2014, the kernel developers took pity on us toolset developers and gave us a much cleaner, simpler way to work out some of these values (or at least I'd like to think that's why they did it). The available statistic is the right way to work out how much memory you have left. The commit message explains the gory details about it, but the great thing is that if they change their mind or add some new memory feature, the available value should be changed as well. We don't have to worry about whether all of slab should be in Cached and whether it is part of Used or not; we have just a number directly out of meminfo. What does this mean for free? Poor old free is now at least 24 years old and it is based upon BSD and SunOS predecessors that go back way before then. People expect that their system tools don't change by default and show the same thing over and over. On the other side, Linux memory management has changed dramatically over those years. Maybe we're all just sheep (see, I had to mention sheep or RAMs somewhere in this) and like things to remain the same always. Probably if free was written now, it would only need the total, available and used columns, with used merely being total minus available. Possibly with some other columns for the wide option. The code itself (found in libprocps) is not very hard to maintain, so it's not like this change would save much time, but for me I'm unsure if free is giving the right and useful result for the people that use it.
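If you want to poke at the raw numbers free works from, here is a small sketch; the field names are as found in current kernels, and the "used" arithmetic simply follows the description above, so it may not match every version of free exactly:
grep -E '^(MemTotal|MemFree|MemAvailable|Buffers|Cached|SReclaimable|SUnreclaim|Shmem):' /proc/meminfo

awk '/^MemTotal:/{t=$2} /^MemFree:/{f=$2} /^Buffers:/{b=$2}
     /^Cached:/{c=$2} /^SReclaimable:/{s=$2}
     END{printf "used ~= %d kB\n", t - f - b - c - s}' /proc/meminfo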

19 April 2016

Craig Sanders: Book Review: Trader's World by Charles Sheffield

One line review: Boys' Own Dale Carnegie Post-Apocalyptic Adventures, with casual racism and misogyny. That tells you everything you need to know about this book. I wasn't expecting much from it, but it was much worse than I anticipated. I'm about half-way through it at the moment, and can't decide whether to just give up in disgust or keep reading in horrified fascination to see if it gets even worse (which is all that's kept me going with it so far). Book Review: Trader's World by Charles Sheffield is a post from: Errata

28 January 2016

Craig Small: pidof lost a shell

pidof is a program that reports the PID of a process that has the given command line. It has an option, -x, which means "scripts too". The idea behind this is that if you have a shell script, it will find it. Recently there was an issue raised saying pidof was not finding a shell script. Trying it out, pidof indeed could not find the sample script but found other scripts; what was going on? What is a script? Seems pretty simple really: a shell script is a text file that is interpreted by a shell program. At the top of the file you have a hash bang line, which starts with #! and then the name of the shell that is going to interpret the text. When you use the -x option, pidof uses the following code:
          if (task.cmd &&
                    !strncmp(task.cmd, cmd_arg1base, strlen(task.cmd)) &&
                    (!strcmp(program, cmd_arg1base) ||
                     !strcmp(program_base, cmd_arg1) ||
                     !strcmp(program, cmd_arg1)))
What this means is: match if the process comm (task.cmd) and the basename (strip the path) of argv[1] match, and one of the following holds: The Hash Bang Line: Most scripts I have come across start with a line like
#!/bin/sh
Which means use the normal shell interpreter (on my system, dash). What was different was that the test script had a first line of
#!/usr/bin/env sh
Which means run the program sh in a new environment. Could this be the difference? The first type of script has the following procfs files:
$ cat -e /proc/30132/cmdline
/bin/sh^@/tmp/pidofx^@
$ cut -f 2 -d' ' /proc/30132/stat
(pidofx)
The first line picks up argv[1] "/tmp/pidofx" while the second finds comm "pidofx". The primary matching is satisfied, as well as the first dot-point, because the basename of argv[1] is "pidofx". What about the script that uses env?
$ cat -e /proc/30232/cmdline
bash^@/tmp/pidofx^@
$ cut -f 2 -d' ' /proc/30232/stat
(bash)
The comm "bash" does not match the basename of argv[1], so this process is not found. How many execve? So the proc filesystem is reporting the scripts differently depending on the first line, but why? The fields change depending on what process is running, and that is dependent on the execve function calls. A typical script has a single execve call; the strace output shows:
29332 execve("./pidofx", ["./pidofx"], [/* 24 vars */]) = 0
While the one using env has a few more:
29477 execve("./pidofx", ["./pidofx"], [/* 24 vars */]) = 0
 29477 execve("/usr/local/bin/sh", ["sh", "./pidofx"], [/* 24 vars */]) = -1 ENOENT (No such file or directory)
 29477 execve("/usr/bin/sh", ["sh", "./pidofx"], [/* 24 vars */]) = -1 ENOENT (No such file or directory)
 29477 execve("/bin/sh", ["sh", "./pidofx"], [/* 24 vars */]) = 0
The first execve is the same for both, but then env is called and it goes on its merry way to find sh. After trying /usr/local/bin and /usr/bin, it finds sh in /bin and execs this program. Because there are two successful execve calls, the procfs fields are different. What Next? So the mystery of pidof missing scripts now has a reasonable explanation. The problem is, how to fix pidof? There doesn't seem to be a fix that isn't a kludge. Hard-coding potential script names seems just evil, but there doesn't seem to be a way to differentiate between a script using env and, say, vi ./pidofx. If you have some ideas, comment below or on the issue on gitlab.
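To reproduce the difference described above, here is a small sketch (the script names and /tmp paths are just examples):
printf '#!/bin/sh\nwhile :; do sleep 5; done\n' > /tmp/direct.sh
printf '#!/usr/bin/env sh\nwhile :; do sleep 5; done\n' > /tmp/via-env.sh
chmod +x /tmp/direct.sh /tmp/via-env.sh
/tmp/direct.sh &  /tmp/via-env.sh &

pidof -x direct.sh      # found: comm is "direct.sh", matching the basename of argv[1]
pidof -x via-env.sh     # not found: after env's extra execve, comm is the shell's name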

20 January 2016

Craig Sanders: lm-sensors configs for Asus Sabertooth 990FX and M5A97 R2.0

I had to replace a motherboard and CPU a few days ago (bought an Asus M5A97 R2.0), and wanted to get lm-sensors working properly on it. Got it working eventually, which was harder than it should have been because the lm-sensors site is MIA, seems to have been rm -rf-ed. For anyone else with this motherboard, the config is included below. This inspired me to fix the config for my Asus Sabertooth 990FX motherboard, also included below. To install, copy-paste to a file under /etc/sensors.d/ and run sensors -s to make sensors evaluate all of the set statements.
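In other words, something like this (the config file name is arbitrary; anything under /etc/sensors.d/ gets read):
cp asus-m5a97-r2.conf /etc/sensors.d/
sensors -s      # evaluate the set statements
sensors         # check the new labels and limits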
# Asus M5A97 R2.0
# based on Asus M5A97 PRO from http://blog.felipe.lessa.nom.br/?p=93
chip "k10temp-pci-00c3"
     label temp1 "CPU Temp (rel)"
chip "it8721-*"
     label  in0 "+12V"
     label  in1 "+5V"
     label  in2 "Vcore"
     label  in2 "+3.3V"
     ignore in4
     ignore in5
     ignore in6
     ignore in7
     ignore fan3
     compute in0  @ * (515/120), @ / (515/120)
     compute in1  @ * (215/120), @ / (215/120)
     label temp1 "CPU Temp"
     label temp2 "M/B Temp"
     set temp1_min 30
     set temp1_max 70
     set temp2_min 30
     set temp2_max 60
     label fan1 "CPU Fan"
     label fan2 "Chassis Fan"
     label fan3 "Power Fan"
     ignore temp3
     set in0_min  12 * 0.95
     set in0_max  12 * 1.05
     set in1_min  5 * 0.95
     set in1_max  5 * 1.05
     set in3_min  3.3 * 0.95
     set in3_max  3.3 * 1.05
     ignore intrusion0
#Asus Sabertooth 990FX
# modified from the version at http://www.spinics.net/lists/lm-sensors/msg43352.html
chip "it8721-isa-0290"
# Temperatures
    label temp1  "CPU Temp"
    label temp2  "M/B Temp"
    label temp3  "VCORE-1"
    label temp4  "VCORE-2"
    label temp5  "Northbridge"         # I put all these here as a reference since the
    label temp6  "DRAM"                # Asus Thermal Radar tool on my Windows box displays
    label temp7  "USB3.0-1"            # all of them.
    label temp8  "USB3.0-2"            # lm-sensors ignores all but the CPU and M/B temps.
    label temp9  "PCIE-1"              # If that is really what they are.
    label temp10 "PCIE-2"
    set temp1_min 0
    set temp1_max 70
    set temp2_min 0
    set temp2_max 60
    ignore temp3
# Fans
    label fan1 "CPU Fan"
    label fan2 "Chassis Fan 1"
    label fan3 "Chassis Fan 2"
    label fan4 "Chassis Fan 3"
#    label fan5 "Chassis Fan 4"      # lm-sensor complains about this
    ignore fan2
    ignore fan3
    set fan1_min 600
    set fan2_min 600
    set fan3_min 600
# Voltages
    label in0 "+12V"
    label in1 "+5V"
    label in2 "Vcore"
    label in3 "+3.3V"
    label in5 "VDDA"
    compute  in0  @ * (50/12), @ / (50/12)
    compute  in1  @ * (205/120), @ / (205/120)
    set in0_min  12 * 0.95
    set in0_max  12 * 1.05
    set in1_min  5 * 0.95
    set in1_max  5 * 1.05
    set in2_min  0.80
    set in2_max  1.6
    set in3_min  3.20
    set in3_max  3.6
    set in5_min  2.2
    set in5_max  2.8
    ignore in4
    ignore in6
    ignore in7
    ignore intrusion0
chip "k10temp-pci-00c3"
     label temp1 "CPU Temp"
lm-sensors configs for Asus Sabertooth 990FX and M5A97 R2.0 is a post from: Errata
